Gaussian Markov Random Field



Globally Optimal Learning for Structured Elliptical Losses

Neural Information Processing Systems

Heavy-tailed and contaminated data are common in many applications of machine learning. A standard technique for handling regression tasks that involve such data is to use robust losses, e.g., the popular Huber loss. In structured problems, however, where there are multiple labels and structural constraints on the labels are imposed (or learned), robust optimization is challenging, and more often than not the loss used is simply the negative log-likelihood of a Gaussian Markov random field.
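A minimal sketch of the robust-loss idea mentioned above: the Huber loss is quadratic for small residuals but only linear for large ones, so a single outlier cannot dominate the objective the way it does under squared loss. The threshold `delta` below is an illustrative choice, not a value from the paper.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

r = np.array([0.5, 10.0])      # one inlier residual, one outlier
print(huber(r))                # outlier contributes ~delta*|r|, not r**2/2
print(0.5 * r**2)              # squared loss gives the outlier 50.0
```

The outlier's contribution grows linearly (9.5 here) instead of quadratically (50.0), which is what makes the loss robust to contamination.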


Efficient methods for Gaussian Markov random fields under sparse linear constraints

Neural Information Processing Systems

Methods for inference and simulation of linearly constrained Gaussian Markov Random Fields (GMRFs) are computationally prohibitive when the number of constraints is large. In some cases, such as for intrinsic GMRFs, they may even be infeasible. We propose a new class of methods to overcome these challenges in the common case of sparse constraints, where one has a large number of constraints and each only involves a few elements. Our methods rely on a basis transformation into blocks of constrained versus non-constrained subspaces, and we show that the methods greatly outperform existing alternatives in terms of computational cost. By combining the proposed methods with the stochastic partial differential equation approach for Gaussian random fields, we also show how to formulate Gaussian process regression with linear constraints in a GMRF setting to reduce computational cost. This is illustrated in two applications with simulated data.
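For context, the classical way to sample a GMRF subject to linear constraints Ax = b (the baseline such methods aim to speed up) is conditioning by kriging: draw an unconstrained sample, then project it onto the constraint set. The sketch below uses dense NumPy algebra for brevity; the precision matrix, constraints, and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3

# Tridiagonal precision matrix Q of a first-order GMRF (dense here for brevity)
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)

# Sparse linear constraints A x = b: each row involves only a few elements
A = np.zeros((k, n))
A[0, 0] = 1.0
A[1, 10], A[1, 11] = 1.0, -1.0
A[2, -1] = 1.0
b = np.array([0.0, 0.0, 1.0])

# Unconstrained sample: x = L^{-T} z, where Q = L L^T
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, rng.standard_normal(n))

# Conditioning by kriging: x_c = x - Q^{-1} A^T (A Q^{-1} A^T)^{-1} (A x - b)
QiAt = np.linalg.solve(Q, A.T)
x_c = x - QiAt @ np.linalg.solve(A @ QiAt, A @ x - b)

print(np.allclose(A @ x_c, b))  # constraints hold exactly
```

The k x k system (A Q^{-1} A^T) is what becomes prohibitive for large k, motivating the basis-transformation approach the paper proposes.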


Colored Markov Random Fields for Probabilistic Topological Modeling

Marinucci, Lorenzo, Di Nino, Leonardo, D'Acunto, Gabriele, Pandolfo, Mario Edoardo, Di Lorenzo, Paolo, Barbarossa, Sergio

arXiv.org Machine Learning

Probabilistic Graphical Models (PGMs) encode conditional dependencies among random variables using a graph (nodes for variables, links for dependencies) and factorize the joint distribution into lower-dimensional components. This makes PGMs well-suited for analyzing complex systems and supporting decision-making. Recent advances in topological signal processing highlight the importance of variables defined on topological spaces in several application domains. In such cases, the underlying topology shapes statistical relationships, limiting the expressiveness of canonical PGMs. To overcome this limitation, we introduce Colored Markov Random Fields (CMRFs), which model both conditional and marginal dependencies among Gaussian edge variables on topological spaces, with a theoretical foundation in Hodge theory. CMRFs extend classical Gaussian Markov Random Fields by including link coloring: connectivity encodes conditional independence, while color encodes marginal independence. We quantify the benefits of CMRFs through a distributed estimation case study over a physical network, comparing it with baselines with different levels of topological prior.


Simplicial Gaussian Models: Representation and Inference

Marinucci, Lorenzo, D'Acunto, Gabriele, Di Lorenzo, Paolo, Barbarossa, Sergio

arXiv.org Machine Learning

PGMs are widely used in several applications, including computer vision, computational biology, and spatial statistics [2, 3, 4]. In a PGM, random variables are associated with the vertices of a graph, while edges encode statistical dependencies. The meaning of the edges depends on the graph type: Bayesian Networks capture directional dependencies through directed acyclic graphs (DAGs) [5], whereas Markov Random Fields (MRFs) model symmetric conditional dependencies with undirected graphs, thanks to the Markov property [6]. A well-studied family is Gaussian Markov Random Fields (GMRFs), i.e., MRFs that model Gaussian random variables [7]. Indeed, conditional dependencies in the Gaussian distribution are encoded by the precision matrix, allowing GMRFs to be learned from data with efficient algorithms [8]. However, PGMs are inherently limited to graphs. First, PGMs typically associate random variables with individual nodes (sets of cardinality one), while in many settings random quantities naturally relate to larger sets. Examples include data traffic in communication networks or water flows in distribution networks, where measurements are collected on the links of the networks [9, 10, 11]. Second, PGMs are restricted to modeling pairwise dependencies via edges.
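The abstract's point that the precision matrix encodes conditional dependencies can be checked numerically: a zero entry Q[i, j] = 0 means x_i and x_j are conditionally independent given the rest, even though they may be marginally correlated. A small sketch with an illustrative 3-node chain:

```python
import numpy as np

# Precision matrix of a 3-node chain GMRF: the zero in position (0, 2)
# encodes that x0 and x2 are conditionally independent given x1.
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Sigma = np.linalg.inv(Q)

# Marginally, x0 and x2 are correlated ...
print(Sigma[0, 2])   # nonzero (0.25 here)

# ... but the conditional covariance of (x0, x2) given x1 is the inverse
# of the corresponding precision sub-block, which is diagonal.
idx = [0, 2]
cond_cov = np.linalg.inv(Q[np.ix_(idx, idx)])
print(cond_cov)      # off-diagonal entries are zero
```

This is why sparse precision matrices, rather than sparse covariance matrices, are the natural parameterization of GMRFs.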


Multitask Learning with Learned Task Relationships

Wan, Zirui, Vlaski, Stefan

arXiv.org Artificial Intelligence

Classical consensus-based strategies for federated and decentralized learning are statistically suboptimal in the presence of heterogeneous local data or task distributions. As a result, in recent years, there has been growing interest in multitask or personalized strategies, which allow individual agents to benefit from one another in pursuing locally optimal models without enforcing consensus. Existing strategies require either precise prior knowledge of the underlying task relationships or are fully non-parametric and instead rely on meta-learning or proximal constructions. In this work, we introduce an algorithmic framework that strikes a balance between these extremes. By modeling task relationships through a Gaussian Markov Random Field with an unknown precision matrix, we develop a strategy that jointly learns both the task relationships and the local models, allowing agents to self-organize in a way consistent with their individual data distributions. Our theoretical analysis quantifies the quality of the learned relationship, and our numerical experiments demonstrate its practical effectiveness.
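This is not the authors' algorithm, but a minimal sketch of the general idea of coupling per-task models through a task-relationship precision matrix P: solve a coupled ridge problem with penalty tr(W P W^T), then re-estimate P from the learned weights, and alternate. All sizes, the regularizer, and the alternation schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, d = 3, 40, 5  # tasks, samples per task, features

# Related tasks: per-task weights are noisy copies of a shared vector
w_true = rng.standard_normal(d)
W_true = w_true[:, None] + 0.1 * rng.standard_normal((d, K))
X = rng.standard_normal((K, n, d))
y = np.einsum('knd,dk->kn', X, W_true) + 0.1 * rng.standard_normal((K, n))

P = np.eye(K)  # initial task-relationship precision
for _ in range(5):
    # Step 1: solve the coupled problem
    #   min_W  sum_k ||X_k w_k - y_k||^2 + tr(W P W^T)
    # Stacking w = vec(W) turns the penalty into w^T (P kron I_d) w.
    H = np.kron(P, np.eye(d))
    g = np.zeros(K * d)
    for k in range(K):
        H[k*d:(k+1)*d, k*d:(k+1)*d] += X[k].T @ X[k]
        g[k*d:(k+1)*d] = X[k].T @ y[k]
    W = np.linalg.solve(H, g).reshape(K, d).T  # columns are w_k

    # Step 2: re-estimate task relationships from the learned weights
    # (ridge-regularized inverse of the empirical task covariance)
    C = W.T @ W / d + 0.1 * np.eye(K)
    P = np.linalg.inv(C)
```

Because the true task weights are near-identical, the estimated P ends up penalizing disagreement between tasks, which is the self-organizing behavior the abstract describes.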


Deep Gaussian Markov Random Fields for Graph-Structured Dynamical Systems

Neural Information Processing Systems

Similar problems also occur in the geosciences, ecology, neuroscience, epidemiology, and transportation systems, where animal movements, the spread of diseases, or traffic load need to be estimated from imperfect data to facilitate scientific discovery and decision-making.


Multi-Component VAE with Gaussian Markov Random Field

Oubari, Fouad, El-Baha, Mohamed, Meunier, Raphael, Décatoire, Rodrigue, Mougeot, Mathilde

arXiv.org Artificial Intelligence

Multi-component datasets with intricate dependencies, like industrial assemblies or multi-modal imaging, challenge current generative modeling techniques. Existing Multi-component Variational AutoEncoders typically rely on simplified aggregation strategies, neglecting critical nuances and consequently compromising structural coherence across generated components. To explicitly address this gap, we introduce the Gaussian Markov Random Field Multi-Component Variational AutoEncoder, a novel generative framework embedding Gaussian Markov Random Fields into both prior and posterior distributions. This design choice explicitly models cross-component relationships, enabling richer representation and faithful reproduction of complex interactions. Empirically, our GMRF MCVAE achieves state-of-the-art performance on a synthetic Copula dataset specifically constructed to evaluate intricate component relationships, demonstrates competitive results on the PolyMNIST benchmark, and significantly enhances structural coherence on the real-world BIKED dataset. Our results indicate that the GMRF MCVAE is especially suited for practical applications demanding robust and realistic modeling of multi-component coherence.
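One concrete piece of embedding a GMRF into a VAE prior is the KL term: for a diagonal-Gaussian posterior and a GMRF prior N(0, Q^{-1}), the KL divergence has a closed form. The sketch below is an illustrative derivation-check, not the paper's formulation; the function name, the 2-component precision matrix, and the specific values are assumptions.

```python
import numpy as np

def kl_diag_posterior_to_gmrf_prior(mu, log_var, Q):
    """KL( N(mu, diag(exp(log_var))) || N(0, Q^{-1}) ).

    Closed form: 0.5 * (tr(Q S) + mu^T Q mu - d - logdet(Q) - sum(log_var)),
    where S = diag(exp(log_var)), so tr(Q S) = sum(diag(Q) * exp(log_var)).
    """
    d = mu.shape[0]
    s2 = np.exp(log_var)
    _, logdet_Q = np.linalg.slogdet(Q)
    return 0.5 * (np.sum(np.diag(Q) * s2) + mu @ Q @ mu
                  - d - logdet_Q - np.sum(log_var))

# Precision coupling two latent components; with Q = I this reduces
# to the standard VAE KL term 0.5 * sum(s^2 + mu^2 - 1 - log_var).
Q = np.array([[1.5, -0.5], [-0.5, 1.5]])
mu = np.array([0.3, -0.2])
log_var = np.array([-0.1, -0.4])
print(kl_diag_posterior_to_gmrf_prior(mu, log_var, Q))
```

The off-diagonal entries of Q are what let the prior express cross-component relationships that a factorized standard-normal prior cannot.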



Fast approximative estimation of conditional Shapley values when using a linear regression model or a polynomial regression model

Aanes, Fredrik Lohne

arXiv.org Machine Learning

We develop a new approximative estimation method for conditional Shapley values obtained using a linear regression model, and our method outperforms existing methodology and implementations. Compared to the sequential method in the shapr package (i.e., fitting one model at a time), our method runs in minutes rather than hours. Compared to the iterative method in the shapr package, we obtain better estimates in less than, or almost the same, amount of time. Even when the number of covariates becomes large, one can still fit thousands of regression models at once using our method. We focus on a linear regression model, but the method extends easily to several types of splines that can be estimated via multivariate linear regression, due to linearity in the parameters.
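The "thousands of regressions at once" idea can be illustrated in its simplest special case: when many regressions share the same design matrix and differ only in their response, a single normal-equations solve with a matrix of responses fits all of them simultaneously. The per-coalition regressions of the conditional-Shapley setting differ in which covariates enter each model, so this shared-design sketch (with made-up sizes) only conveys the batching principle, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 200, 4, 1000  # samples, covariates, number of regressions

X = np.hstack([np.ones((n, 1)), rng.standard_normal((n, d))])  # shared design
B_true = rng.standard_normal((d + 1, m))
Y = X @ B_true + 0.01 * rng.standard_normal((n, m))            # one column per model

# One normal-equations solve fits all m regressions simultaneously:
# column j of B is the coefficient vector of regression j.
B = np.linalg.solve(X.T @ X, X.T @ Y)
print(B.shape)  # (5, 1000)
```

Batching like this replaces m separate solver calls with one factorization of the shared Gram matrix, which is where the minutes-versus-hours difference comes from.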